This is the notebook for the Module 5 Programming Assignment.
Here are a few tips for using the IPython HTML notebook:
You should rename this notebook to <your JHED id>_PR5.ipynb. Do it right now.
Make certain the entire notebook executes before you submit it. (See Hint #5 above)
Change the following variables:
In [1]:
name = "Shehzad Qureshi"
jhed_id = "squresh6"
if name == "Student Name" or jhed_id == "sname1":
    raise Exception("Change the name and/or JHED ID...preferably to yours.")
Add whatever additional imports you require here. Stick with the standard libraries and those required by the class. The import gives you access to these functions: http://ipython.org/ipython-doc/stable/api/generated/IPython.core.display.html which, among other things, will permit you to display HTML as the result of evaluated code (see HTML() or display_html()).
In [2]:
from IPython.core.display import *
from StringIO import StringIO
import numpy as np
from operator import itemgetter
For this assignment, you are going to implement both hill-climbing and a genetic algorithm to solve a problem.
The problem is to generate the phrase "methinks it is like a weasel" which is from Hamlet. The particular phrase was made popular in Richard Dawkin's book The Blind Watchmaker.
You will need to think of an evaluation function that compares two strings and returns a score indicating how different the two strings are. It should return zero if the two strings are identical. You may find the ord() function helpful. It returns the ASCII code for a single string character:
In [3]:
ord( " ")
Out[3]:
In [4]:
chr(97)
Out[4]:
'a'
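As a minimal sketch of such an evaluation function (the name string_difference and the use of plain ASCII differences are choices made here for illustration, not part of the assignment; the fitness function defined below works in key indices instead):
In [ ]:
def string_difference(s1, s2):
    # sum of absolute ASCII differences, position by position;
    # returns 0 if and only if the two strings are identical
    return sum(abs(ord(a) - ord(b)) for a, b in zip(s1, s2))

string_difference("weasel", "weasel"), string_difference("weasel", "measel")  # -> (0, 10)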
For hill-climbing, you will need to create a successor function. Because the state space is so large (for a given string, the successor function should return every possible single letter permutation of the current string), you will want to use a stochastic successor function that simply returns one of those possibilities at random.
For the genetic algorithm, you will need a function to generate a random individual, you will need a crossover operator, and a mutation operator. For this assignment, you can use tournament selection (remember that tournament selection involves picking seven individuals at random and returning the best one) because the fitness function needs to be minimized.
Finally, both of your implementations should work with a string of any length (not just the given target).
Hill Climbing
We will implement a stochastic version of hill climbing. We'll start by defining some variables and functions for logging.
In [5]:
debug = False           # controls log()
verbose_output = True   # controls verbose(); named so the flag does not shadow the function below

def log(msg):
    if debug: print msg

def verbose(msg):
    if verbose_output: print msg
Now we define our variable of allowed characters. We could use ord() as shown above, but it isn't necessary: this approach puts each allowable character as an element in a list.
In [6]:
key = [x for x in "abcdefghijklmnopqrstuvwxyz "]
We need a way to generate an initial state for hill climbing. Given the length of the string we can pick random characters from our key above to form our initial state.
In [7]:
def generate_initial_state(strlen):
    init = [np.random.choice(key) for i in xrange(strlen)]
    init = "".join(init)
    log("Generated initial state: " + init)
    return init
Hill climbing ordinarily requires a successor function that generates all successor states of the current state. Because that space is large, implemented instead is a stochastic version that performs a single one-position shift: it randomly picks a position in the state and shifts that character left or right in the key, chosen at random. This demonstrates the "climb" step by step, unlike generating all successors at once and performing the climb in a single move.
In [8]:
def generate_random_successor(value):
    pos = np.random.randint(0, len(value))
    c = int(np.random.choice([-1, 1]))
    tmp = list(value)
    tmp[pos] = key[(key.index(tmp[pos]) + c) % len(key)]
    tmp = "".join(tmp)
    log("Successor: " + tmp)
    return tmp
We'll need a way to evaluate our strings numerically, so this converts each allowable character into its index in the key, e.g. 'a' = 0, 'z' = 25. The value returned is an array of numbers on which mathematical operations are easier.
In [9]:
def string_to_array(val):
    return np.array([key.index(x) for x in val])
The fitness function evaluates a state by comparing it to the target. For each position it computes the circular distance between the two characters' key indices, min(|c - t|, len(key) - |c - t|), and returns the average over all positions. The distance wraps around because it is entirely possible to loop from the end of the key back to the beginning: "xyz" should have a small Manhattan distance to "xya".
In [10]:
def fitness(current, target):
    c, t = string_to_array(current), string_to_array(target)
    # circular distance per position: the direct difference or the wrap
    # around the key, whichever is smaller
    d = np.absolute(c - t)
    return np.average(np.minimum(d, len(key) - d))
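A quick sanity check of the wrap-around example above: 'x' and 'y' match exactly, and 'z' to 'a' wraps with a circular distance of 2, so the average over three characters is 2/3.
In [ ]:
fitness("xyz", "xya")  # -> 0.6666..., the small distance we wanted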
Our hill climbing function uses the stochastic method instead of generating all successor states: it evaluates one successor at a time and compares its fitness with the current fitness. A climb is taken if and only if the new fitness value is better. This function returns only when the initial state evolves into the target.
In [11]:
def hillclimb(target):
    current = generate_initial_state(len(target))
    verbose(current)
    value = fitness(current, target)
    itercount = 0
    while value != 0.0:
        candidate = generate_random_successor(current)
        candidate_value = fitness(candidate, target)
        if candidate_value < value:
            current, value = candidate, candidate_value
        itercount = itercount + 1
    verbose("Reached target in {0} iterations".format(itercount))
    return current
Some simple testing for hill climbing:
In [12]:
hillclimb("methinks it is like a weasel")
Out[12]:
'methinks it is like a weasel'
In [13]:
hillclimb("i thought i thaw a putty cat")
Out[13]:
'i thought i thaw a putty cat'
In [14]:
hillclimb("beam me up scotty")
Out[14]:
'beam me up scotty'
In [15]:
hillclimb("artificial intelligence is fun")
Out[15]:
'artificial intelligence is fun'
Genetic Algorithm
The implemented genetic algorithm uses tournament selection when choosing parents from the population pool. We also introduce randomness when reproducing and mutating. The variables below define our probabilities and the tournament selection size.
In [16]:
Pc = 0.9     # probability that crossover occurs during reproduction
Pm = 0.05    # probability that offspring are mutated
Tsize = 7    # tournament selection size
Next we need to generate our initial population. Given the size and the length of each population member we can call our generator from hill climbing to generate the entire population.
In [17]:
def generate_initial_population(population_size, strlen):
    log("Generating initial population of size {0} and length {1}".format(population_size, strlen))
    population = [generate_initial_state(strlen) for x in xrange(population_size)]
    log(population)
    return population
Similarly, we can reuse the fitness function from hill climbing when evaluating the population. Since this is a minimization problem, each member's score is evaluated as 1 / (1 + f(n)), so that lower raw fitness yields a higher score.
In [18]:
def evaluate_population(population, target):
    log("Evaluating population")
    result = [(p, 1 / (1 + fitness(p, target))) for p in population]
    log(result)
    return result
Picking a parent requires tournament selection. Instead of considering the entire population, we select Tsize members at random and pick the one with the highest score. The downside of picking one parent at a time is the possibility of picking the same parent twice in a pairing, which is not necessarily a bad thing.
In [19]:
def pick_parent(population):
    tournament = np.random.choice(len(population), Tsize, replace=False)
    entries = [population[i] for i in tournament]
    # sort ascending by score and take the last (best) entry
    parent = sorted(entries, key=itemgetter(1)).pop()[0]
    log("Picked parent " + parent)
    return parent
Our mutation algorithm selects a random position in the state and changes that character to another random character. Unlike hill climbing, which used single one-position shifts, this is a single random replacement.
In [20]:
def mutate(child):
    pos = np.random.randint(0, len(child))
    c = [x for x in child]
    c[pos] = np.random.choice(key)
    log("Mutated child: {0}".format("".join(c)))
    return "".join(c)
The reproduction function takes two parents and produces two offspring. There are two probabilities involved: Pc, the probability that crossover occurs at all (otherwise the parents are returned unchanged), and Pm, the probability that the offspring are mutated. The crossover point is simply a random position in the state.
In [21]:
def reproduce(parent1, parent2):
    if np.random.rand() >= Pc: return parent1, parent2
    crossover = np.random.randint(1, len(parent1))
    # swap tails at the crossover point so both children keep positional alignment
    child1 = parent1[:crossover] + parent2[crossover:]
    child2 = parent2[:crossover] + parent1[crossover:]
    if np.random.rand() < Pm:
        child1, child2 = mutate(child1), mutate(child2)
    log("Reproducing {0} and {1} and got {2}, {3}".format(parent1, parent2, child1, child2))
    return child1, child2
The genetic algorithm generates an initial population, runs for a predefined number of iterations (generations), and outputs the member of the final population that is most like the target. An extra check stops the loop early if the target has been reached.
In [22]:
def genetic_algorithm(target, population_size=50, generation_limit=500):
    population, generation = generate_initial_population(population_size, len(target)), 0
    while generation < generation_limit:
        if target in population: break
        result, new_population = evaluate_population(population, target), list()
        for i in xrange(population_size / 2):
            parent1, parent2 = pick_parent(result), pick_parent(result)
            child1, child2 = reproduce(parent1, parent2)
            new_population.extend([child1, child2])
        population, generation = new_population, generation + 1
    if target in population: verbose("Reached target in {0} generations".format(generation))
    final_population = sorted(evaluate_population(population, target), key=itemgetter(1), reverse=True)
    return final_population[0][0]
Some testing for the genetic algorithm:
In [23]:
genetic_algorithm("methinks its like a weasel", population_size=200, generation_limit=1000)
Out[23]:
In [24]:
genetic_algorithm("i thought i thaw a putty cat", population_size=200, generation_limit=1000)
Out[24]:
In [25]:
genetic_algorithm("beam me up scotty", population_size=150, generation_limit=1000)
Out[25]:
In [26]:
genetic_algorithm("artificial intelligence is fun", population_size=200, generation_limit=1000)
Out[26]:
Hill climbing seems like a pretty straightforward algorithm, especially when using stochastic successor generation. As long as our fitness function is well defined, the algorithm will always reach its target. The one downside of the implementation above is that generating a single random successor can produce a state that moves away from the target rather than closer, wasting an iteration.
It should be possible to make stochastic successor generation smarter. For instance, we could compare each element of the state against the target to determine which positions actually need to change, and reshape the probability distribution to favor those positions when generating successors (see the sketch below). The problem is that we slowly move away from purely stochastic generation if we do; I feel there is a balance between the two approaches.
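As a rough sketch of that idea (hypothetical, not part of the submitted solution), the function below reuses key and string_to_array from above and samples the position to shift with probability proportional to each character's circular distance from the target, so characters that are already correct are never touched:
In [ ]:
def generate_biased_successor(value, target):
    # per-position circular distance to the target (assumes value != target)
    c, t = string_to_array(value), string_to_array(target)
    d = np.minimum(np.absolute(c - t), len(key) - np.absolute(c - t))
    # favor the positions that are furthest from the target
    pos = np.random.choice(len(value), p=d / float(d.sum()))
    shift = int(np.random.choice([-1, 1]))
    tmp = list(value)
    tmp[pos] = key[(key.index(tmp[pos]) + shift) % len(key)]
    return "".join(tmp)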
The genetic algorithm, on the other hand, is pretty fascinating. It largely depends on these variables: the crossover probability Pc, the mutation probability Pm, the tournament size Tsize, the population size, and the generation limit. The probabilities, sizes, and limits should be self-explanatory. Larger populations and more generations generally lead to better results, at the cost of run time; there is probably a tradeoff between the generation limit and population size, although I would weigh population size more heavily.
The crossover function is one that needs special attention. The algorithm implemented above uses a random crossover position to generate the offspring, but I feel this could be done better. For instance, would it not be better to choose a crossover point that takes into account the fitness of the state? Rather than being random, we could select a position around a group of elements with high fitness. This requires calculating which group of elements has the better fitness, and deciding what "better" even means; the comparison is O(n^2), which can be very expensive. A sketch of one such scheme follows.
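Purely as an illustration (hypothetical, not part of the submitted solution), this sketch scores every candidate crossover point with the fitness function above and keeps the best one; with n - 1 candidate points and an O(n) fitness call each, it is exactly the O(n^2) comparison mentioned above:
In [ ]:
def best_crossover_point(parent1, parent2, target):
    # try every crossover point and keep the one whose first child
    # scores best (lowest fitness) against the target
    best_k, best_score = 1, None
    for k in xrange(1, len(parent1)):
        score = fitness(parent1[:k] + parent2[k:], target)
        if best_score is None or score < best_score:
            best_k, best_score = k, score
    return best_k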